24 research outputs found

    A Study on a System for Expressing Emotions and Behavior Based on Facial Expressions

    Doctoral dissertation, Kyushu Institute of Technology. Degree number: 情工博甲第320号. Date of award: March 24, 2017. Contents: 1 Introduction | 2 Configuration of CONBE Robot System | 3 Animal-like Behavior of CONBE Robot using CBA | 4 Emotion Generating System of CONBE Robot | 5 Experiment and discussion | 6 Conclusions. Kyushu Institute of Technology, 2016.

    Robot Assisting Water Serving to Disabilities by Voice Control

    ROS is an open-source robot operating system. In this paper, we use ROS to control the Conbe robot arm. By introducing YOLACT real-time instance segmentation, we trained our own model for object detection. The speech recognition system is built with DeepSpeech, and speech synthesis with Mozilla Text-To-Speech using the Tacotron2-DDC model. DeepSpeech is an end-to-end speech system in which deep learning supersedes the traditional processing stages. Combined with a language model, this approach achieves higher performance than traditional methods on hard speech recognition tasks while also being much simpler. In this way, we create an artificial-intelligence agent that can hold a simple conversation with people, and a voice control system built on top of the speech recognition system. In the experiments, we successfully command the robot arm by voice to move between positions and serve water to people with disabilities. With this research, voice-controlled robot arms can be applied in the life-support area, making daily life more convenient for people with disabilities.
    The 2021 International Conference on Artificial Life and Robotics (ICAROB 2021), January 21-24, 2021, Higashi-Hiroshima (changed to online format)
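A voice control layer of the kind described above ultimately maps recognized text to arm commands. A minimal sketch of a keyword-based command parser follows; the command phrases, action names, and target positions are illustrative assumptions, not taken from the paper:

```python
# Minimal keyword-based parser mapping recognized speech to arm commands.
# Command phrases and target positions are illustrative assumptions.

COMMANDS = {
    "move home":   ("move", (0.0, 0.0, 0.3)),
    "pick cup":    ("pick", (0.25, 0.10, 0.05)),
    "serve water": ("pour", (0.30, -0.05, 0.15)),
}

def parse_command(transcript):
    """Return (action, target_xyz) for the first known phrase found in
    the transcript, or None if no command phrase is recognized."""
    text = transcript.lower()
    for phrase, command in COMMANDS.items():
        if phrase in text:
            return command
    return None
```

In a full system, the returned action and target would be forwarded to the motion planner (e.g. via a ROS topic or action server) rather than executed directly.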

    Deep Learning Methods for Semantic Segmentation of Dense 3D SLAM Maps

    Most real-time SLAM systems can only achieve semi-dense mapping, and the robot lacks specific knowledge of the mapping results; it can therefore achieve only simple positioning and obstacle avoidance, and a target object to be grasped may be treated merely as an obstacle, which hinders motion planning. Using semantic segmentation in dense SLAM maps allows the robot to better understand the map information, distinguish the meaning of different blocks in the map through semantic labels, and achieve fast feature matching and Loop Closure Detection based on the relationships between semantic labels in the scene. Many semantic segmentation datasets based on street scenes and indoor scenes are available, and these datasets share some common labels. Based on these training data, we can derive a semantic segmentation model for RGB images by training on the PyTorch platform.
    The 2021 International Conference on Artificial Life and Robotics (ICAROB 2021), January 21-24, 2021, Higashi-Hiroshima (changed to online format)
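One way to use semantic labels for loop-closure candidates, as the abstract suggests, is to compare the semantic content of keyframes. The sketch below proposes candidate pairs by cosine similarity of per-frame label histograms; the labels and threshold are illustrative assumptions, not the paper's exact pipeline:

```python
# Sketch: propose loop-closure candidates by comparing semantic-label
# histograms of keyframes via cosine similarity. Labels and threshold
# are illustrative assumptions, not the paper's exact pipeline.
import math
from collections import Counter

def label_histogram(labels):
    """Count the semantic labels observed in one keyframe."""
    return Counter(labels)

def cosine_similarity(h1, h2):
    keys = set(h1) | set(h2)
    dot = sum(h1.get(k, 0) * h2.get(k, 0) for k in keys)
    n1 = math.sqrt(sum(v * v for v in h1.values()))
    n2 = math.sqrt(sum(v * v for v in h2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def loop_closure_candidates(frames, threshold=0.9):
    """Return index pairs of non-adjacent keyframes whose semantic
    content is similar enough to suggest a revisited place."""
    hists = [label_histogram(f) for f in frames]
    pairs = []
    for i in range(len(hists)):
        for j in range(i + 2, len(hists)):  # skip adjacent frames
            if cosine_similarity(hists[i], hists[j]) >= threshold:
                pairs.append((i, j))
    return pairs
```

Candidate pairs would still need geometric verification (e.g. feature matching on the two frames) before being accepted as loop closures.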

    Anomaly Detection in Time Series Data Using Support Vector Machines

    Analysis of large data sets is increasingly important in business and scientific research. One of the challenges in such analysis stems from uncertainty in data, which can produce anomalous results. In this paper, we propose a method of anomaly detection in time series data using a Support Vector Machine. Three different kernels of the Support Vector Machine are analyzed to predict anomalies in the UCR public data set. Comparison of the three kernels shows that well-chosen parameter values for the RBF kernel are critical for improving the validity and accuracy of anomaly detection. Our results show that the RBF kernel of the Support Vector Machine can be used to advantage in detecting anomalies.
    The 2021 International Conference on Artificial Life and Robotics (ICAROB 2021), January 21-24, 2021, Higashi-Hiroshima (changed to online format)
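An RBF-kernel SVM applied to time series anomaly detection can be sketched with scikit-learn's one-class SVM over sliding windows. This is a minimal illustration of the technique, not the paper's training setup; the window width and the `nu`/`gamma` parameters are assumptions:

```python
# Sketch: RBF-kernel one-class SVM for anomaly detection on sliding
# windows of a time series. Window width, nu, and gamma are
# illustrative; the paper's UCR experimental setup is not reproduced.
import numpy as np
from sklearn.svm import OneClassSVM

def sliding_windows(series, width):
    """Stack overlapping windows of the series into a feature matrix."""
    return np.array([series[i:i + width]
                     for i in range(len(series) - width + 1)])

def detect_anomalies(series, width=16, nu=0.05, gamma="scale"):
    """Return start indices of windows the RBF one-class SVM flags
    as anomalous (model.predict returns +1 normal, -1 anomalous)."""
    X = sliding_windows(np.asarray(series, dtype=float), width)
    model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(X)
    flags = model.predict(X)
    return np.where(flags == -1)[0]
```

Here `nu` bounds the fraction of training windows treated as outliers, which is why its value, together with `gamma`, strongly affects detection quality, in line with the abstract's observation about RBF parameters.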

    Grasping Motion for Small Non-Rigid Food Using Instance Semantic Segmentation

    The importance of food automation has become a challenge in this era, since food is an essential factor for humans. In this paper, we design a robot framework that autonomously generates grasping motion for non-rigid food objects. The system can recognize and localize objects and target regions for pick-and-place, rather than only their positions. Assembling a lunch box with different foods requires advances in both hardware and software to create an efficient process. The robot platform is based on a seven-axis industrial robot arm equipped with instance segmentation based on Cascade Mask R-CNN for Japanese food. A modular end effector was designed and prototyped to combine a soft gripper and a vacuum pad in a single unit, which allows the system to handle different food objects. In the experiments, we also evaluated the performance of the pick-and-place process. The system can successfully pick and place food into a lunch box with a success rate of 90%.
    The Society of Instrument and Control Engineers (SICE) Annual Conference 2020 (SICE2020), September 23-26, 2020, Chiang Mai, Thailand (on-site sessions cancelled due to the spread of COVID-19)
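Going from an instance mask to a grasp target can be illustrated very simply: the sketch below takes the centroid of the binary mask as the pick point in pixel coordinates. This is an illustrative simplification, not the paper's full pick-point selection:

```python
# Sketch: derive a pick point from an instance segmentation mask by
# taking the mask centroid in pixel coordinates. An illustrative
# simplification of pick-point selection, not the paper's pipeline.
import numpy as np

def grasp_point(mask):
    """Return the (row, col) centroid of a binary instance mask,
    or None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())
```

In a real system the pixel centroid would then be deprojected into the arm's frame using camera intrinsics and depth, and the end effector mode (soft gripper vs. vacuum pad) chosen per food class.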

    Anomaly Detection using Variational Autoencoder with Spectrum Analysis for Time Series Data

    Uncertainty is an ever-present challenge in life. To meet this challenge in data analysis, we propose a method for detecting anomalies in data. This method, based in part on a Variational Autoencoder, identifies spiking raw data by means of spectrum analysis. Time series data are examined in the frequency domain to enhance the detection of anomalies. In this paper, we have used standard data sets to validate the proposed method. Experimental results show that comparing the frequency domain with the original data for anomaly detection can improve validity and accuracy on all criteria. Therefore, analysis of time series data that combines a Variational Autoencoder with frequency-domain spectrum methods can effectively detect anomalies. Contribution: we have proposed an anomaly detection method based on time series data analysis combining a Variational Autoencoder and spectrum analysis, and have benchmarked the method against recent related research.
    10th International Conference on Informatics, Electronics, and Vision (ICIEV20), 26-29 August, 2020, Kitakyushu, Japan
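The frequency-domain step can be sketched on its own: a spike in the raw data spreads energy into the high-frequency part of each window's spectrum, so high-frequency spectral energy per window serves as an anomaly score. The VAE stage is omitted here, and the window size and frequency cutoff are illustrative assumptions:

```python
# Sketch of the frequency-domain step only: score each window of a
# time series by its high-frequency spectral energy, which rises
# sharply for spiking data. The VAE stage is omitted; window size
# and cutoff fraction are illustrative assumptions.
import numpy as np

def spectral_scores(series, width=32, cutoff=0.25):
    """Return one high-frequency-energy score per sliding window.

    `cutoff` is the fraction of the one-sided spectrum treated as
    'high frequency'; a spike spreads energy into that band."""
    series = np.asarray(series, dtype=float)
    scores = []
    for i in range(len(series) - width + 1):
        window = series[i:i + width] - series[i:i + width].mean()
        mag = np.abs(np.fft.rfft(window))      # one-sided spectrum
        hi = int(len(mag) * (1.0 - cutoff))    # start of high band
        scores.append(float(np.sum(mag[hi:] ** 2)))
    return np.array(scores)
```

Windows whose score exceeds a threshold (or, in the paper's method, whose VAE reconstruction error is also high) would be flagged as anomalous.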

    Real-Time Instance Segmentation and Point Cloud Extraction for Japanese Food

    Innovation in technology is playing an important role in the development of the food industry, as evidenced by the growing number of food review and food delivery applications. Similarly, it is expected that the process of producing and packaging food itself will become increasingly automated through the use of robotic systems. The shift towards food automation would help ensure quality control of food products and improve production-line efficiency, leading to reduced cost and higher profit margins for restaurants and factories. One key enabler for such an automated system is the ability to detect and classify food objects with great accuracy and speed. In this study, we explore real-time food object segmentation using a stereo depth-sensing camera mounted on a robotic arm system. Instance segmentation on a Japanese food dataset is used to classify food objects at the pixel level using a Cascade Mask R-CNN deep learning model. Additionally, depth information from the sensor is extracted to generate a 3D point cloud of the food object and its surroundings. When combined with the segmented 2D RGB image, a segmented 3D point cloud of the food object can be constructed, which helps facilitate food automation operations such as precision grasping of food objects with various shapes and sizes.
    The Society of Instrument and Control Engineers (SICE) Annual Conference 2020 (SICE2020), September 23-26, 2020, Chiang Mai, Thailand (on-site sessions cancelled due to the spread of COVID-19)
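Combining a segmentation mask with depth to get a segmented point cloud reduces to pinhole-model back-projection of the masked pixels. A minimal sketch, assuming known camera intrinsics (the values used below are placeholders, not the paper's calibration):

```python
# Sketch: back-project a depth image into a 3D point cloud with the
# pinhole camera model, keeping only pixels inside a segmentation
# mask. Intrinsics (fx, fy, cx, cy) are illustrative placeholders.
import numpy as np

def deproject(depth, mask, fx, fy, cx, cy):
    """Return an (N, 3) array of camera-frame points (X, Y, Z) for the
    masked pixels that have valid (non-zero) depth."""
    vs, us = np.nonzero(mask & (depth > 0))   # rows = v, cols = u
    z = depth[vs, us].astype(float)
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```

The resulting object-only point cloud is what a grasp planner would consume to fit a grasp pose to the food item's actual shape.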

    Robot Motion Generation by Hand Demonstration

    Since traditional robot teaching is time-consuming and requires explicit instruction of robot motion, we present a systematic framework, based on deep learning and experiments, for generating robot motion trajectories from human hand demonstration. In this system, the worker can teach the robot more easily than by assigning instructions to the robot controller manually. The robot can thus imitate the action in a new situation instead of being taught directly through the robot arm. Our contributions include three points: 1) a real-time, marker-less method for extracting 3D hand movement from a human using hand detection; 2) generalization of the demonstrated hand trajectories; 3) robot path planning for grasping the object and placing it at the target. We also present an experiment conducted with real user movement data and evaluate the system on a manipulator robot. The investigation demonstrates a robot pick-and-place task for food taught by hand demonstration.
    The 2021 International Conference on Artificial Life and Robotics (ICAROB 2021), January 21-24, 2021, Higashi-Hiroshima (changed to online format)
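A common first step when generalizing demonstrated hand trajectories is to normalize them to a fixed number of waypoints, so demonstrations of different speeds and lengths become comparable. The sketch below does this by uniform arc-length resampling; it is an illustrative preprocessing step, not the paper's learning method:

```python
# Sketch: normalize a demonstrated 3D hand trajectory to a fixed
# number of waypoints by uniform arc-length resampling, so that
# demonstrations of different speeds and lengths can be compared
# or averaged. Illustrative preprocessing, not the paper's method.
import numpy as np

def resample_trajectory(points, n=50):
    """Resample an (M, 3) waypoint array to (n, 3), uniformly spaced
    along the path's arc length."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    target = np.linspace(0.0, s[-1], n)
    return np.stack([np.interp(target, s, points[:, k])
                     for k in range(3)], axis=1)
```

Resampled trajectories can then be averaged across demonstrations or fed to a trajectory model before being mapped onto the manipulator's workspace.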

    Robot Motion and Grasping for Blindfold Handover

    Autonomous robots in human-robot interaction (HRI) are becoming part of human life as the number of service and personal robots used in our homes increases. To help close this gap in HRI, we propose a system that autonomously creates robot motion and grasping to assist people with disabilities, such as blind people, in handover and pick-and-place tasks. In this paper, we develop a robot motion for receiving an object handed over by a blindfolded human, who represents a blind person. To determine the pose of the target object and the human hand, we implement 6-DOF pose detection using a point cloud and hand detection using the Single Shot Detection (SSD) deep learning model, and plan the motion of a 9-DOF robot arm with a hand. Finally, we conduct and evaluate blindfolded human-robot handover experiments.
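Once the object and hand poses are detected, the receiving motion typically approaches through a pre-grasp waypoint rather than moving straight to the object. A minimal sketch of computing such a waypoint by backing off from the object toward the robot base; the standoff distance and frame conventions are illustrative assumptions:

```python
# Sketch: compute a pre-grasp waypoint for a handover by backing off
# from the detected object position along the line toward the robot
# base. Standoff distance and frames are illustrative assumptions.
import numpy as np

def pre_grasp_waypoint(object_pos, base_pos, standoff=0.10):
    """Return the point `standoff` meters from the object, on the
    line from the object back toward the robot base."""
    object_pos = np.asarray(object_pos, dtype=float)
    direction = np.asarray(base_pos, dtype=float) - object_pos
    direction /= np.linalg.norm(direction)
    return object_pos + standoff * direction
```

The planner would move the arm to this waypoint first, then close the remaining distance slowly, which matters for safety when the human handing over the object cannot see the robot.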